10 research outputs found

    Classification-based prediction of effective connectivity between timeseries with a realistic cortical network model

    Effective connectivity measures the pattern of causal interactions between brain regions. Traditionally, these patterns of causality are inferred from brain recordings using either non-parametric, i.e., model-free, or parametric, i.e., model-based, approaches. The latter approaches, when based on biophysically plausible models, have the advantage that they may facilitate the interpretation of causality in terms of underlying neural mechanisms. Recent biophysically plausible neural network models of recurrent microcircuits have been shown to reproduce the characteristics of real neural activity well and can be applied to model interacting cortical circuits. However, it is challenging to invert these models in order to estimate effective connectivity from observed data. Here, we propose a classification-based method to approximate the result of such complex model inversion. The classifier predicts the pattern of causal interactions given a multivariate timeseries as input. It is trained on a large number of pairs of multivariate timeseries and the respective patterns of causal interactions, generated by simulation from the neural network model. In simulated experiments, we show that the proposed method is much more accurate at detecting the causal structure of timeseries than current best-practice methods. Additionally, we present further results to characterize the validity of the neural network model and the ability of the classifier to adapt to the generative model of the data.
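
    A minimal sketch of how such a classification pipeline could be assembled is given below. The toy simulator (a lag-coupled autoregressive pair), the lagged cross-correlation features, and the random-forest classifier are illustrative assumptions only; the paper trains on data simulated from a biophysical cortical network model, which is not reproduced here.

```python
# Hedged sketch: train a classifier to map a multivariate timeseries to a
# causal-interaction label, using labelled simulations. The simulator below is
# a stand-in toy generator, not the paper's cortical network model.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def simulate_pair(coupling, n_steps=500, lag=2, noise=0.5):
    """Toy two-region generator: region 0 drives region 1 only if coupling == 1."""
    x = np.zeros((n_steps, 2))
    for t in range(lag, n_steps):
        x[t, 0] = 0.8 * x[t - 1, 0] + noise * rng.standard_normal()
        drive = coupling * 0.6 * x[t - lag, 0]
        x[t, 1] = 0.8 * x[t - 1, 1] + drive + noise * rng.standard_normal()
    return x

def features(x, max_lag=5):
    """Lagged cross-correlations in both directions, used as classifier input."""
    f = []
    for lag in range(1, max_lag + 1):
        f.append(np.corrcoef(x[:-lag, 0], x[lag:, 1])[0, 1])  # region 0 -> region 1
        f.append(np.corrcoef(x[:-lag, 1], x[lag:, 0])[0, 1])  # region 1 -> region 0
    return np.array(f)

# Build a labelled training set from many simulations, then fit and evaluate.
labels = rng.integers(0, 2, size=400)  # one causal pattern (coupled or not) per trial
X = np.stack([features(simulate_pair(c)) for c in labels])
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X[:300], labels[:300])
print("held-out accuracy:", clf.score(X[300:], labels[300:]))
```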

    User interface for haptic interaction with volume data

    Department of Software and Computer Science Education, Faculty of Mathematics and Physics

    Determining what information is transmitted across neural populations

    Quantifying the amount of information communicated between neural populations is crucial to understanding brain dynamics. To address this question, many tools for the analysis of time series of neural activity, such as Granger causality, Transfer Entropy, and Directed Information, have been proposed. However, none of these popular model-free measures can reveal what information has been exchanged. Yet understanding what information is exchanged is key to being able to infer, from brain recordings, the nature and mechanisms of brain computation. To provide the mathematical tools needed to address this issue, we developed a new measure, exploiting the benefits of the novel Partial Information Decomposition framework, that determines how much information about each specific stimulus or task feature has been transferred between two neuronal populations. We tested this methodology on simulated neural data and showed that it captures the specific information being transmitted very well, and that it is highly robust to several of the confounds that have proven problematic for previous methods. Moreover, the measure detected the temporal evolution and directionality of information transfer significantly better than previous measures. We also applied the measure to an EEG dataset acquired during a face detection task, which revealed interesting patterns of interhemispheric, phase-specific information transfer. We finally analyzed high-gamma activity in an MEG dataset of visuomotor associations. Our measure allowed tracing of the stimulus information flow and confirmed the notion that the dorsal fronto-parietal network is crucial for the visuomotor computations transforming visual information into motor plans. Altogether, our work suggests that our new measure has the potential to uncover previously hidden, specific information transfer dynamics in neural communication.
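
    To make the baseline quantities named above concrete, the sketch below gives plug-in (histogram) estimates of Transfer Entropy and of stimulus information from discretized signals. The toy data, binning, and function names are assumptions made for illustration; the PID-based feature-specific measure introduced in the paper is not implemented here.

```python
# Hedged sketch: plug-in estimates of Transfer Entropy and stimulus information
# from discretized signals. Illustrative baseline only; the PID-based
# feature-specific measure described in the abstract is not implemented here.
import numpy as np
from collections import Counter

def entropy(samples):
    """Plug-in Shannon entropy (bits) of a sequence of hashable symbols."""
    counts = np.array(list(Counter(samples).values()), dtype=float)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def transfer_entropy(x, y):
    """TE(X->Y) = H(Y_t, Y_past) + H(Y_past, X_past) - H(Y_past) - H(Y_t, Y_past, X_past)."""
    yt, yp, xp = y[1:], y[:-1], x[:-1]
    return (entropy(list(zip(yt, yp))) + entropy(list(zip(yp, xp)))
            - entropy(list(yp)) - entropy(list(zip(yt, yp, xp))))

def stimulus_information(s, y):
    """I(S; Y) = H(S) + H(Y) - H(S, Y): the information Y carries about the stimulus."""
    return entropy(list(s)) + entropy(list(y)) - entropy(list(zip(s, y)))

# Toy example: X is a noisy copy of a binary stimulus S, and Y copies X with a
# one-sample lag, so information about S should flow from X to Y.
rng = np.random.default_rng(1)
s = rng.integers(0, 2, 2000)
x = (s + (rng.random(2000) < 0.1)) % 2   # noisy copy of the stimulus
y = np.roll(x, 1)                         # Y lags X by one sample
print("TE(X->Y):", transfer_entropy(x, y))
print("I(S;Y):  ", stimulus_information(s, y))
```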

    An information-theoretic quantification of the content of communication between brain regions

    Quantifying the amount, content, and direction of communication between brain regions is key to understanding brain function. Traditional methods to analyze brain activity based on the Wiener-Granger causality principle quantify the overall information propagated by neural activity between simultaneously recorded brain regions, but do not reveal the information flow about specific features of interest (such as sensory stimuli). Here, we develop a new information-theoretic measure termed Feature-specific Information Transfer (FIT), quantifying how much information about a specific feature flows between two regions. FIT merges the Wiener-Granger causality principle with information-content specificity. We first derive FIT and analytically prove its key properties. We then illustrate and test them with simulations of neural activity, demonstrating that FIT identifies, within the total information flowing between regions, the information that is transmitted about specific features. We then analyze three neural datasets obtained with different recording methods (magneto-encephalography, electro-encephalography, and spiking activity) to demonstrate the ability of FIT to uncover the content and direction of information flow between brain regions beyond what can be discerned with traditional analytical methods. FIT can improve our understanding of how brain regions communicate by uncovering previously hidden feature-specific information flow.
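
    The abstract states that FIT merges the Wiener-Granger causality principle with information-content specificity. As a hedged reference, the two standard ingredients it builds on can be written as below; the definition of FIT itself, which rests on a Partial Information Decomposition, is not reproduced here.

```latex
% Wiener-Granger-style overall information transfer (Transfer Entropy): what the
% sender X's past adds to predicting the receiver Y's present, beyond Y's own past.
\mathrm{TE}_{X \to Y} \;=\; I\big(Y_t \,;\, X_{\mathrm{past}} \;\big|\; Y_{\mathrm{past}}\big)

% Information-content specificity: the information the receiver's activity
% carries about a specific feature S (e.g., a sensory stimulus).
I(S \,;\, Y_t) \;=\; \sum_{s,\,y} p(s, y)\,\log_2 \frac{p(s, y)}{p(s)\,p(y)}
```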